1,590 research outputs found

    An easy-to-hard learning paradigm for multiple classes and multiple labels

    © 2017 Weiwei Liu, Ivor W. Tsang and Klaus-Robert Müller. Many applications, such as human action recognition and object detection, can be formulated as multiclass classification problems. One-vs-rest (OVR) is one of the most widely used approaches to multiclass classification due to its simplicity and excellent performance. However, its results degrade when the application contains many confusing classes. For example, hand clap and boxing are two confusing actions: hand clap is easily misclassified as boxing, and vice versa. Precisely classifying confusing classes therefore remains a challenging task. To obtain better performance on multiclass problems with confusing classes, we first develop a classifier chain model for multiclass classification (CCMC) to transfer class information between classifiers. Then, based on an analysis of our proposed model, we propose an easy-to-hard learning paradigm for multiclass classification that automatically identifies easy and hard classes and then uses the predictions from easier classes to help solve harder classes. Similar to CCMC, the classifier chain (CC) model was proposed by Read et al. (2009) to capture label dependency in multi-label classification. However, CC does not consider the order of difficulty of the labels and its performance degrades when there are many confusing labels; it is therefore non-trivial to learn an appropriate label order for CC. Motivated by our analysis for CCMC, we also propose the easy-to-hard learning paradigm for multi-label classification, which automatically identifies easy and hard labels and then uses the predictions from easier labels to help solve harder labels. We also demonstrate that our proposed strategy can be successfully applied to a wide range of applications, such as ordinal classification and relationship prediction. Extensive empirical studies validate our analysis and the effectiveness of our proposed easy-to-hard learning strategies.
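    As a rough illustration of the chain idea described above, the following sketch (not the authors' CCMC implementation; the function names and the use of scikit-learn are assumptions) orders classes from easy to hard by one-vs-rest cross-validation accuracy and lets each later classifier see the predicted probabilities of the easier classes as extra features.

```python
# Hypothetical sketch of an easy-to-hard classifier chain; this is NOT the
# authors' CCMC code, only an illustration of the idea described above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score


def fit_easy_to_hard_chain(X, y):
    classes = np.unique(y)
    # Estimate per-class "difficulty" with plain one-vs-rest CV accuracy.
    difficulty = {
        c: cross_val_score(LogisticRegression(max_iter=1000),
                           X, (y == c).astype(int), cv=3).mean()
        for c in classes
    }
    order = sorted(classes, key=lambda c: -difficulty[c])  # easiest first

    # Train the chain: each binary classifier sees the original features plus
    # the probabilistic outputs of all easier (previously trained) classes.
    chain, Z = [], X
    for c in order:
        clf = LogisticRegression(max_iter=1000).fit(Z, (y == c).astype(int))
        chain.append((c, clf))
        Z = np.hstack([Z, clf.predict_proba(Z)[:, [1]]])
    return chain


def predict_easy_to_hard_chain(chain, X):
    Z, scores = X, []
    for c, clf in chain:
        p = clf.predict_proba(Z)[:, [1]]
        scores.append((c, p.ravel()))
        Z = np.hstack([Z, p])
    labels = np.array([c for c, _ in scores])
    S = np.column_stack([s for _, s in scores])
    return labels[S.argmax(axis=1)]
```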

    Towards a Cure for BCI Illiteracy

    Brain–Computer Interfaces (BCIs) allow a user to control a computer application through brain activity acquired, e.g., by EEG. One of the biggest challenges in BCI research is to understand and solve the problem of “BCI Illiteracy”: BCI control does not work for a non-negligible portion of users (estimated at 15 to 30%). Here, we investigate the illiteracy problem in BCI systems that are based on the modulation of sensorimotor rhythms. In this paper, a sophisticated adaptation scheme is presented which guides the user, within one session, from an initial subject-independent classifier operating on simple features to a subject-optimized state-of-the-art classifier, while the user interacts with the same feedback application throughout. While initial runs use supervised adaptation methods for robust co-adaptive learning of user and machine, final runs use unsupervised adaptation and therefore provide an unbiased measure of BCI performance. Using this approach, which does not involve any offline calibration measurement, good performance was obtained by good BCI participants (and also by one novice) after 3–6 min of adaptation. More importantly, the use of machine learning techniques allowed users who had previously been unable to achieve successful feedback to gain significant control over the BCI system. In particular, one participant showed no peak of the sensorimotor idle rhythm at the beginning of the experiment, but developed such a peak during the course of the session (and used voluntary modulation of its amplitude to control the feedback application).
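    The abstract does not spell out the adaptation rules; purely as an illustration of what unsupervised adaptation of a linear classifier can look like (a simple running-mean re-biasing scheme, not necessarily the one used in this work), consider the following sketch, in which all names are hypothetical.

```python
# Illustrative only: a simple form of unsupervised adaptation that re-biases a
# fixed linear classifier toward the running mean of incoming (unlabeled)
# feature vectors.  The paper's actual adaptation scheme is more elaborate.
import numpy as np


class AdaptiveLinearClassifier:
    def __init__(self, w, b, eta=0.05):
        self.w = np.asarray(w, dtype=float)   # fixed weight vector
        self.b = float(b)                     # bias, adapted online
        self.eta = eta                        # adaptation rate
        self.mu = None                        # running mean of features

    def update_unsupervised(self, x):
        """Called once per feedback trial with an unlabeled feature vector."""
        x = np.asarray(x, dtype=float)
        self.mu = x if self.mu is None else (1 - self.eta) * self.mu + self.eta * x
        # Keep the decision boundary centred on the current feature distribution.
        self.b = -float(self.w @ self.mu)

    def decide(self, x):
        return np.sign(float(self.w @ np.asarray(x, dtype=float)) + self.b)
```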

    Towards Zero Training for Brain-Computer Interfacing

    Electroencephalogram (EEG) signals are highly subject-specific and vary considerably even between recording sessions of the same user within the same experimental paradigm. This challenges the stable operation of Brain-Computer Interface (BCI) systems. The classical approach is to train users by neurofeedback to produce fixed, stereotypical patterns of brain activity. In the machine learning approach, a widely adopted method for dealing with these variations is to record a so-called calibration measurement at the beginning of each session in order to optimize spatial filters and classifiers specifically for each subject and each day. This adaptation of the system to the individual brain signature of each user removes the need for extensive user training. In this paper we suggest a new method that overcomes the requirement for these time-consuming calibration recordings for long-term BCI users. The method takes advantage of knowledge collected in previous sessions: by a novel technique, prototypical spatial filters are determined which have better generalization properties than single-session filters. In particular, they can be used in follow-up sessions without the need to recalibrate the system. This way, calibration periods can be dramatically shortened or even completely omitted for these ‘experienced’ BCI users. The feasibility of our novel approach is demonstrated in a series of online BCI experiments. Although these were performed without any calibration measurement at all, no loss of classification performance was observed.
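    The abstract does not describe the novel technique itself; as a naive stand-in (an assumption for illustration, not the paper's method), the sketch below computes CSP spatial filters from class covariance matrices averaged over several earlier sessions, so that the resulting filters are not tied to any single day.

```python
# Illustrative sketch only: cross-session ("prototypical") CSP filters obtained
# by averaging class covariances over previous sessions.  This is a naive
# stand-in, not the novel technique proposed in the paper.
import numpy as np
from scipy.linalg import eigh


def session_covariances(trials, labels):
    """trials: (n_trials, n_channels, n_samples); labels: 0/1 per trial."""
    covs = {0: [], 1: []}
    for X, y in zip(trials, labels):
        C = X @ X.T / X.shape[1]
        covs[y].append(C / np.trace(C))       # trace-normalized covariance
    return np.mean(covs[0], axis=0), np.mean(covs[1], axis=0)


def prototypical_csp(per_session_covs, n_filters=3):
    """per_session_covs: list of (C0, C1) pairs, one per previous session."""
    C0 = np.mean([c0 for c0, _ in per_session_covs], axis=0)
    C1 = np.mean([c1 for _, c1 in per_session_covs], axis=0)
    # Generalized eigenvalue problem C0 w = lambda (C0 + C1) w
    vals, vecs = eigh(C0, C0 + C1)
    order = np.argsort(vals)
    picks = np.r_[order[:n_filters], order[-n_filters:]]
    return vecs[:, picks]                     # columns are spatial filters
```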

    Preparation of Ni–YSZ thin and thick films on metallic interconnects as cell supports. Applications as anode for SOFC

    In this work, we propose the preparation of a duplex anodic layer composed of both a thin (100 nm) and a thick (10 µm) film of Ni–YSZ material. The support of this anode is a metallic substrate, which is the interconnect of the SOFC unit cell. The metallic support limits the thermal treatment temperature to 800 °C in order to preserve the interconnect's mechanical behaviour and to reduce corrosion. We have chosen to prepare the anodic coatings by a sol–gel route coupled with a dip-coating process, which are low-cost techniques that allow working at moderate temperatures. Thin films are obtained by dipping the interconnect substrate into a sol, and thick films by dipping it into an optimized slurry. After thermal treatment at only 800 °C, the anodic coatings are adherent and homogeneous. The thin films have compact microstructures that provide a protective ceramic barrier on the metal surface. The further coatings, 10 µm thick, are porous and constitute the active anodic material.

    Operator theory and function theory in Drury-Arveson space and its quotients

    The Drury-Arveson space H^2_d, also known as symmetric Fock space or the d-shift space, is a Hilbert function space that carries a natural d-tuple of operators acting on it, which gives it the structure of a Hilbert module. This survey aims to introduce the Drury-Arveson space, to give a panoramic view of the main operator theoretic and function theoretic aspects of this space, and to describe the universal role that it plays in multivariable operator theory and in Pick interpolation theory. Comment: Final version (to appear in Handbook of Operator Theory); 42 pages.
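    For orientation, the standard definition of the space (general background, not a result specific to this survey) can be stated compactly:

```latex
% Standard definition of the Drury-Arveson space (general background,
% not taken from the survey itself).
\[
  H^2_d \;=\; \Big\{\, f(z) = \sum_{\alpha \in \mathbb{N}^d} c_\alpha z^\alpha
      \ \text{analytic on the unit ball } \mathbb{B}_d \subset \mathbb{C}^d :
      \ \|f\|^2 = \sum_{\alpha} \frac{\alpha!}{|\alpha|!}\,|c_\alpha|^2 < \infty \Big\},
\]
\[
  \text{with reproducing kernel}\qquad
  k(z,w) \;=\; \frac{1}{1 - \langle z, w\rangle_{\mathbb{C}^d}},
  \qquad z, w \in \mathbb{B}_d ,
\]
\[
  \text{and the } d\text{-shift } \; M_z = (M_{z_1},\dots,M_{z_d}),
  \qquad (M_{z_i} f)(z) = z_i f(z).
\]
```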

    Salience-based selection: attentional capture by distractors less salient than the target

    Current accounts of attentional capture predict that the most salient stimulus is invariably selected first. However, existing salience and visual search models assume noise in the map computation or selection process. Consequently, they predict that the first selection depends stochastically on salience, implying that attention could even be captured first by the second most salient (instead of the most salient) stimulus in the field. Yet capture by less salient distractors has not been reported, and salience-based selection accounts claim that a distractor has to be more salient than the target in order to capture attention. We tested this prediction with empirical and modeling approaches based on the visual search distractor paradigm. For the empirical part, we manipulated the salience of target and distractor parametrically and measured reaction-time interference when a distractor was present compared to when it was absent. Reaction-time interference was strongly correlated with distractor salience relative to the target. Moreover, even distractors less salient than the target captured attention, as measured by reaction-time interference and oculomotor capture. In the modeling part, we simulated the first selection in the distractor paradigm using behavioral measures of salience and taking into account the time course of selection, including noise. We were able to replicate the result pattern obtained in the empirical part. We conclude that each salience value follows a specific selection time distribution, and that attentional capture occurs when the selection time distributions of target and distractor overlap. Hence, selection is stochastic in nature, and attentional capture occurs with a certain probability that depends on relative salience.
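    To make the claim about overlapping selection-time distributions concrete, here is a minimal Monte-Carlo sketch (with arbitrary illustrative parameters, not fitted to the reported data) in which each salience value maps to a noisy selection time and capture occurs whenever the distractor's sampled time beats the target's.

```python
# Minimal Monte-Carlo sketch of salience-dependent selection-time distributions;
# parameter values are arbitrary illustrations, not fitted to the study's data.
import numpy as np

rng = np.random.default_rng(0)


def capture_probability(target_salience, distractor_salience,
                        noise_sd=20.0, n_sim=100_000):
    """Higher salience -> shorter mean selection time (here a 1/salience scale)."""
    t_target = 100.0 / target_salience + rng.normal(0, noise_sd, n_sim)
    t_distr = 100.0 / distractor_salience + rng.normal(0, noise_sd, n_sim)
    return np.mean(t_distr < t_target)


# Even a distractor LESS salient than the target is selected first on a
# non-negligible fraction of trials, because the two noisy distributions overlap.
print(capture_probability(target_salience=1.2, distractor_salience=1.0))
```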

    A critical evaluation of network and pathway based classifiers for outcome prediction in breast cancer

    Recently, several classifiers that combine primary tumor data, such as gene expression data, with secondary data sources, such as protein-protein interaction networks, have been proposed for predicting outcome in breast cancer. In these approaches, new composite features are typically constructed by aggregating the expression levels of several genes, with the secondary data sources guiding this aggregation. Although many studies claim that these approaches improve classification performance over single-gene classifiers, the gain in performance is difficult to assess, mainly because different breast cancer data sets and validation procedures are employed to assess performance. Here we address these issues by employing a large cohort of six breast cancer data sets as a benchmark set and by performing an unbiased evaluation of the classification accuracies of the different approaches. Contrary to previous claims, we find that composite feature classifiers do not outperform simple single-gene classifiers. We investigate the effect of (1) the number of selected features, (2) the specific gene set from which features are selected, (3) the size of the training set and (4) the heterogeneity of the data set on the performance of composite feature and single-gene classifiers. Strikingly, we find that randomization of the secondary data sources, which destroys all biological information in these sources, does not result in a deterioration of the performance of composite feature classifiers. Finally, we show that when a proper correction for gene set size is performed, the stability of single-gene sets is similar to that of composite feature sets. Based on these results there is currently no reason to prefer prognostic classifiers based on composite features over single-gene classifiers for predicting outcome in breast cancer.
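    As a schematic of what such composite features can look like, the following sketch (a generic illustration, not the implementation of any specific published classifier) averages the expression of each gene with that of its interaction-network neighbours and compares the resulting features against plain single-gene features under the same cross-validated classifier; the function names are hypothetical.

```python
# Generic illustration of network-guided "composite" features versus plain
# single-gene features; not the implementation of any specific published method.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score


def composite_features(expr, genes, ppi_graph):
    """expr: (n_samples, n_genes) array; genes: gene ids in column order;
    ppi_graph: a networkx.Graph of protein-protein interactions."""
    idx = {g: i for i, g in enumerate(genes)}
    cols = []
    for g in genes:
        members = [g]
        if g in ppi_graph:
            members += [n for n in ppi_graph.neighbors(g) if n in idx]
        # Composite feature = mean expression over the gene and its neighbours.
        cols.append(expr[:, [idx[m] for m in members]].mean(axis=1))
    return np.column_stack(cols)


def compare_single_vs_composite(expr, genes, labels, ppi_graph):
    clf = LogisticRegression(max_iter=1000)
    acc_single = cross_val_score(clf, expr, labels, cv=5).mean()
    acc_composite = cross_val_score(
        clf, composite_features(expr, genes, ppi_graph), labels, cv=5).mean()
    return acc_single, acc_composite
```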

    Transport of trace gases via eddy shedding from the Asian summer monsoon anticyclone and associated impacts on ozone heating rates

    The highly vibrant Asian summer monsoon (ASM) anticyclone plays an important role in the efficient transport of Asian tropospheric air masses to the extratropical upper troposphere and lower stratosphere (UTLS). In this paper, we demonstrate long-range transport of Asian trace gases via eddy-shedding events using MIPAS (Michelson Interferometer for Passive Atmospheric Sounding) satellite observations, ERA-Interim reanalysis data and the ECHAM5–HAMMOZ global chemistry-climate model. Model simulations and observations consistently show that Asian boundary layer trace gases are lifted to UTLS altitudes in the monsoon anticyclone and are further transported horizontally eastward and westward by eddies detached from the anticyclone. We present an event of eddy shedding during 1–8 July 2003 and discuss a 1995–2016 climatology of eddy-shedding events. Our analysis indicates that eddies detached from the anticyclone contribute to the transport of Asian trace gases away from the Asian region to the western Pacific (20–30°N, 120–150°E) and western Africa (20–30°N, 0–30°E). Over the last two decades, the estimated frequency of occurrence of eddy-shedding events is ∼68% towards western Africa and ∼25% towards the western Pacific. Model sensitivity experiments considering a 10% reduction in Asian emissions of non-methane volatile organic compounds (NMVOCs) and nitrogen oxides (NOx) were performed with ECHAM5–HAMMOZ to understand the impact of Asian emissions on the UTLS. The model simulations show that transport of Asian emissions due to eddy shedding significantly affects the chemical composition of the upper troposphere (∼100–400 hPa) and lower stratosphere (∼100–80 hPa) over western Africa and the western Pacific. The 10% reduction of Asian NMVOC and NOx emissions leads to decreases in peroxyacetyl nitrate (PAN) (2%–10% near 200–80 hPa), ozone (1%–4.5% near ∼150 hPa) and ozone heating rates (0.001–0.004 K day−1 near 300–150 hPa) in the upper troposphere over western Africa and the western Pacific.